
    Dependence of ground state energy of classical n-vector spins on n

    We study the ground state energy E_G(n) of N classical n-vector spins with the Hamiltonian H = -\sum_{i>j} J_{ij} S_i \cdot S_j, where S_i and S_j are n-vectors and the coupling constants J_{ij} are arbitrary. We prove that E_G(n) is independent of n for all n > n_{max}(N) = \lfloor(\sqrt{8N+1}-1)/2\rfloor, and we show that this bound is the best possible. We also derive an upper bound for E_G(m) in terms of E_G(n) for m < n. We obtain an upper bound on the frustration in the system, as measured by F(n) = (\sum_{i>j} |J_{ij}| + E_G(n)) / \sum_{i>j} |J_{ij}|. Finally, we describe a procedure for constructing a set of couplings J_{ij} such that an arbitrary given state {S_i} is the ground state.
    Comment: 6 pages, 2 figures, submitted to Physical Review
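
    The quantities above can be explored numerically. Below is a minimal Python sketch, not the paper's construction: it evaluates the bound n_{max}(N) quoted in the abstract and estimates E_G(n) for a random symmetric coupling matrix by a crude projected gradient descent over unit n-vectors. The function names and the optimisation scheme are illustrative assumptions, with no guarantee of reaching the true ground state.

```python
# Minimal numerical sketch, not the paper's construction: evaluate the bound
# n_max(N) = floor((sqrt(8N+1)-1)/2) from the abstract and estimate E_G(n)
# for a random symmetric coupling matrix by projected gradient descent over
# unit n-vector spins. This local search carries no ground-state guarantee.
import numpy as np

def n_max(N):
    """Value of n beyond which E_G(n) is claimed not to decrease."""
    return int((np.sqrt(8 * N + 1) - 1) // 2)

def energy(J, S):
    """H = -sum_{i>j} J_ij S_i . S_j for a spin configuration S of shape (N, n)."""
    # J is symmetric with zero diagonal, so the full double sum counts each pair twice.
    return -0.5 * np.sum(J * (S @ S.T))

def estimate_ground_state_energy(J, n, steps=2000, lr=0.05, seed=0):
    """Crude local minimisation of H over unit n-vectors (illustrative only)."""
    rng = np.random.default_rng(seed)
    N = J.shape[0]
    S = rng.normal(size=(N, n))
    S /= np.linalg.norm(S, axis=1, keepdims=True)
    for _ in range(steps):
        grad = -(J @ S)                                   # dH/dS_i = -sum_j J_ij S_j
        S -= lr * grad
        S /= np.linalg.norm(S, axis=1, keepdims=True)     # project back onto unit spheres
    return energy(J, S)

N = 8
J = np.random.default_rng(1).normal(size=(N, N))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

print("n_max(N) =", n_max(N))
for n in (1, 2, n_max(N), n_max(N) + 2):
    print(f"estimated E_G({n}) = {estimate_ground_state_energy(J, n):.4f}")
```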

    Relative Comparison Kernel Learning with Auxiliary Kernels

    In this work we consider the problem of learning a positive semidefinite kernel matrix from relative comparisons of the form "object A is more similar to object B than it is to C", where the comparisons are provided by humans. Existing solutions to this problem assume that many comparisons are available, so that a high-quality kernel can be learned. This assumption is unrealistic for many real-world tasks, since relative assessments require human input, which is often costly or difficult to obtain; as a result, only a limited number of comparisons may be provided. In this work, we explore methods for aiding the learning of a kernel with auxiliary kernels built from more easily extractable information about the relationships among objects. We propose a new kernel learning approach in which the target kernel is defined as a conic combination of auxiliary kernels and a kernel whose elements are learned directly. We formulate a convex optimization problem to solve for this target kernel that adds only minor overhead to methods that use no auxiliary information. Empirical results show that, given few training relative comparisons, our method learns kernels that generalize to more out-of-sample comparisons than both methods that do not utilize auxiliary information and similar methods that learn metrics over objects.
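
    As a rough illustration of the kind of program described above, here is a minimal Python sketch using cvxpy. The objective, margin, and regularisation are assumptions rather than the authors' exact formulation, and the helper name learn_kernel is hypothetical: the target kernel is modelled as a conic combination of auxiliary kernels plus a directly learned PSD component, with a soft-margin constraint per relative comparison.

```python
# Minimal sketch of the general idea with an assumed objective (hinge slacks
# plus a trace penalty); this is not the authors' exact convex program, and
# learn_kernel is a hypothetical helper name.
import cvxpy as cp
import numpy as np

def learn_kernel(aux_kernels, comparisons, margin=1.0, reg=0.1):
    """Learn K = sum_m mu_m K_m + L with mu >= 0 and L PSD from triplets (a, b, c)."""
    N = aux_kernels[0].shape[0]
    M = len(aux_kernels)
    mu = cp.Variable(M, nonneg=True)                  # conic-combination weights
    L = cp.Variable((N, N), PSD=True)                 # directly learned PSD component
    xi = cp.Variable(len(comparisons), nonneg=True)   # slack per comparison

    K = L + sum(mu[m] * aux_kernels[m] for m in range(M))

    # "a is more similar to b than to c": K[a, b] should beat K[a, c] by a margin.
    cons = [K[a, b] - K[a, c] >= margin - xi[t]
            for t, (a, b, c) in enumerate(comparisons)]

    cp.Problem(cp.Minimize(cp.sum(xi) + reg * cp.trace(L)), cons).solve()
    return K.value, mu.value

# Toy usage: two random auxiliary kernels over 5 objects, two comparisons.
rng = np.random.default_rng(0)
A, Z = rng.normal(size=(5, 3)), rng.normal(size=(5, 4))
K, mu = learn_kernel([A @ A.T, Z @ Z.T], comparisons=[(0, 1, 2), (3, 4, 0)])
print("auxiliary kernel weights:", mu)
```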

    Theory and Applications of Robust Optimization

    In this paper we survey the primary research, both theoretical and applied, in the area of Robust Optimization (RO). Our focus is on the computational attractiveness of RO approaches, as well as the modeling power and broad applicability of the methodology. In addition to surveying prominent theoretical results of RO, we also present some recent results linking RO to adaptable models for multi-stage decision-making problems. Finally, we highlight applications of RO across a wide spectrum of domains, including finance, statistics, learning, and various areas of engineering.
    Comment: 50 pages
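
    As a small, self-contained illustration of the computational attractiveness the survey emphasises, the sketch below encodes the textbook robust linear constraint under ellipsoidal uncertainty, which reduces to a second-order cone constraint solvable by off-the-shelf software. The data are invented for illustration and this is not an example taken from the survey.

```python
# Illustrative sketch of a textbook robust counterpart (not an example from the
# survey): the constraint a^T x <= b for every a in the ellipsoid
# {a0 + P u : ||u||_2 <= 1} is equivalent to a0^T x + ||P^T x||_2 <= b,
# a second-order cone constraint; the data below are invented.
import cvxpy as cp
import numpy as np

n = 3
c = np.array([-1.0, -2.0, -0.5])      # minimise c^T x
a0 = np.array([1.0, 1.0, 1.0])        # nominal constraint coefficients
P = 0.2 * np.eye(n)                   # shape of the ellipsoidal uncertainty set
b = 4.0

x = cp.Variable(n, nonneg=True)
robust_constraint = a0 @ x + cp.norm(P.T @ x, 2) <= b  # worst case over the ellipsoid
prob = cp.Problem(cp.Minimize(c @ x), [robust_constraint, x <= 3])
prob.solve()
print("robust optimal x:", np.round(x.value, 3))
```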

    Forecasting and Granger Modelling with Non-linear Dynamical Dependencies

    Traditional linear methods for forecasting multivariate time series cannot satisfactorily model the non-linear dependencies that may exist in non-Gaussian series. We build on the theory of learning vector-valued functions in a reproducing kernel Hilbert space and develop a method for learning prediction functions that accommodate such non-linearities. The method learns not only the predictive function but also, directly from the data, the matrix-valued kernel underlying the function search space. Our approach is based on learning multiple matrix-valued kernels, each composed of a set of input kernels and a set of output kernels learned in the cone of positive semidefinite matrices. In addition to superior predictive performance in the presence of strong non-linearities, our method also recovers the hidden dynamic relationships between the series and thus offers a new alternative to existing graphical Granger techniques.
    Comment: Accepted for ECML-PKDD 201
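
    For readers unfamiliar with matrix-valued kernels, the sketch below shows vector-valued kernel ridge regression with a fixed separable kernel K(x, x') = k(x, x') B, where a PSD matrix B couples the output series. This is only a simplified stand-in, since the paper learns both the input and output kernels from data; the toy series, helper names, and choice of B here are assumptions.

```python
# Simplified stand-in, not the paper's method: vector-valued kernel ridge
# regression with a fixed separable matrix-valued kernel K(x, x') = k(x, x') B,
# where the PSD matrix B couples the output series. The paper learns both the
# input kernels and B; here they are fixed and the toy data are invented.
import numpy as np

def rbf(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def fit(X, Y, B, lam=1e-2, gamma=1.0):
    """Solve (k(X, X) kron B + lam I) c = vec(Y) for the coefficient vector c."""
    T, p = Y.shape
    G = np.kron(rbf(X, X, gamma), B)
    return np.linalg.solve(G + lam * np.eye(T * p), Y.reshape(-1))

def predict(Xtrain, c, B, Xtest, gamma=1.0):
    G = np.kron(rbf(Xtest, Xtrain, gamma), B)
    return (G @ c).reshape(len(Xtest), B.shape[0])

# Toy data: two coupled series, predict y_{t+1} from y_t.
rng = np.random.default_rng(0)
y = np.zeros((101, 2))
for t in range(100):
    y[t + 1] = np.tanh(y[t] @ np.array([[0.8, 0.3], [0.0, 0.7]])) + 0.05 * rng.normal(size=2)

Xtr, Ytr = y[:80], y[1:81]
B = np.array([[1.0, 0.4], [0.4, 1.0]])    # output coupling; learned from data in the paper
c = fit(Xtr, Ytr, B)
print(predict(Xtr, c, B, y[80:100])[:3])  # one-step-ahead predictions for the held-out steps
```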